
    Nondeterministic Interactive Refutations for Nearest Boolean Vector

    Most n-dimensional subspaces 𝒜 of ℝ^m are Ω(√m)-far from the Boolean cube {-1, 1}^m when n < cm for some constant c > 0. How hard is it to certify that the Nearest Boolean Vector (NBV) is at least γ√m far from a given random 𝒜? Certifying NBV instances is relevant to the computational complexity of approximating the Sherrington-Kirkpatrick Hamiltonian, i.e. maximizing x^TAx over the Boolean cube for a matrix A sampled from the Gaussian Orthogonal Ensemble. The connection was discovered by Mohanty, Raghavendra, and Xu (STOC 2020). Improving on their work, Ghosh, Jeronimo, Jones, Potechin, and Rajendran (FOCS 2020) showed that certification is not possible in the sum-of-squares framework when m ≪ n^1.5, even with distance γ = 0. We present a non-deterministic interactive certification algorithm for NBV when m ≳ n log n and γ ≲ 1/(mn^1.5). The algorithm is obtained by adapting a public-key encryption scheme of Ajtai and Dwork.
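
    To make the certified quantity concrete, the following toy sketch (not the paper's algorithm; the dimensions are chosen small enough to enumerate the cube, and all names are illustrative) samples a random n-dimensional subspace of ℝ^m and brute-forces the distance from the nearest Boolean vector to it:

```python
# Toy illustration (not the paper's certification algorithm): estimate how far
# the Boolean cube {-1, 1}^m is from a random n-dimensional subspace of R^m by
# brute force, for dimensions small enough to enumerate all 2^m sign vectors.
import itertools
import numpy as np

def nbv_distance(n: int, m: int, rng: np.random.Generator) -> float:
    """Distance from the nearest Boolean vector to a random n-dim subspace of R^m."""
    # A random n-dimensional subspace: column span of an m x n Gaussian matrix.
    basis, _ = np.linalg.qr(rng.standard_normal((m, n)))
    best = np.inf
    for signs in itertools.product([-1.0, 1.0], repeat=m):
        x = np.array(signs)
        # Distance from x to the subspace = norm of its component orthogonal to it.
        residual = x - basis @ (basis.T @ x)
        best = min(best, np.linalg.norm(residual))
    return best

rng = np.random.default_rng(0)
for m in (8, 12, 16):
    n = m // 4
    d = nbv_distance(n, m, rng)
    print(f"m={m:2d}, n={n:2d}: nearest Boolean vector at distance {d:.3f}  (sqrt(m)={m**0.5:.3f})")
```

    Even at these tiny dimensions the nearest Boolean vector typically sits at a constant fraction of √m away from the subspace, which is the phenomenon the certification task asks to verify efficiently.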

    Downward Self-Reducibility in TFNP

    A problem is downward self-reducible if it can be solved efficiently given an oracle that returns solutions for strictly smaller instances. In the decisional landscape, downward self-reducibility is well studied and it is known that all downward self-reducible problems are in PSPACE. In this paper, we initiate the study of downward self-reducible search problems which are guaranteed to have a solution - that is, the downward self-reducible problems in TFNP. We show that most natural PLS-complete problems are downward self-reducible and that any downward self-reducible problem in TFNP is contained in PLS. Furthermore, if the downward self-reducible problem is in TFUP (i.e. it has a unique solution), then it is actually contained in UEOPL, a subclass of CLS. This implies that if integer factoring is downward self-reducible then it is in fact in UEOPL, suggesting that no efficient algorithm exists for factoring a number using the factorizations of smaller numbers.
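
    To illustrate the recursion pattern in code, here is a minimal sketch using the textbook decision-problem example (SAT restricted by fixing one variable); the paper's subject is the analogous notion for total search problems, and all helper names below are illustrative:

```python
# Minimal sketch of a downward self-reduction (illustrative helper names).
# Textbook example: deciding SAT on n variables, given an oracle that decides
# SAT on strictly fewer variables.  The paper studies the analogous notion for
# total *search* problems in TFNP; this only shows the recursion pattern.
from typing import Callable, List, Tuple

Clause = Tuple[int, ...]   # (1, -2) stands for the clause (x1 OR NOT x2)
Formula = List[Clause]

def restrict(phi: Formula, var: int, value: bool) -> Formula:
    """Fix one variable, producing a strictly smaller instance."""
    satisfied_literal = var if value else -var
    out = []
    for clause in phi:
        if satisfied_literal in clause:
            continue                                   # clause already satisfied
        out.append(tuple(l for l in clause if abs(l) != var))
    return out

def sat_downward(phi: Formula, n: int,
                 smaller_oracle: Callable[[Formula, int], bool]) -> bool:
    """Decide phi over variables x1..xn using only oracle calls on n-1 variables."""
    if n == 0:
        return () not in phi    # no variables left: unsatisfiable iff an empty clause remains
    return (smaller_oracle(restrict(phi, n, True), n - 1)
            or smaller_oracle(restrict(phi, n, False), n - 1))

# For demonstration the "oracle" is simply a recursive self-call.
def oracle(phi: Formula, n: int) -> bool:
    return sat_downward(phi, n, oracle)

print(oracle([(1, 2), (-1, 2), (-2,)], 2))   # unsatisfiable formula -> False
```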

    Fine-Grained Cryptography: A New Frontier?

    Fine-grained cryptography is concerned with adversaries that are only moderately more powerful than the honest parties. We survey recent results in this relatively underdeveloped area of study and examine whether the time is ripe for further advances in it.

    PPP-Completeness and Extremal Combinatorics


    Cryptography from Information Loss

    © Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod Vaikuntanathan. Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is “lossy” reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into “useful” hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions:
    1. We say that a language L has an f-reduction to a language L′ for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x_1, ..., x_m), with each x_i ∈ {0, 1}^n, and outputs a string z such that with high probability, L′(z) = f(L(x_1), L(x_2), ..., L(x_m)). Suppose a language L has an f-reduction C to L′ that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds: (a) f is the OR function, t ≤ m/100, and L′ is the same as L; (b) f is the Majority function, and t ≤ m/100; (c) f is the OR function, t ≤ O(m log n), and the reduction has no error. This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which result only in auxiliary-input one-way functions.
    2. Our second result is about the stronger notion of t-compressing f-reductions – reductions that only output t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses).
    Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, and which may be of independent interest.
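
    As a toy numerical illustration of the lossiness definition (not taken from the paper), note that for a deterministic reduction C we have I(X; C(X)) = H(C(X)), so the mutual information can be computed exactly for a small input distribution. The snippet below does this for uniform inputs and a single-output-bit OR map, an extreme example of a compressing reduction; the definition itself requires the bound to hold for every input distribution, whereas this only evaluates one.

```python
# Toy numerical illustration of the t-lossy definition (not from the paper):
# for a deterministic map C and input distribution X, I(X; C(X)) = H(C(X)),
# so lossiness can be read off from the entropy of the output distribution.
import itertools
import math
from collections import Counter

def entropy(counts: Counter, total: int) -> float:
    """Shannon entropy (in bits) of the empirical distribution in `counts`."""
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

m = 8  # number of input bits; here X is uniform over {0,1}^m

# C outputs a single bit: the OR of all input bits (an extreme "compressing" map).
outputs = Counter(int(any(bits)) for bits in itertools.product([0, 1], repeat=m))
print(f"I(X; C(X)) = H(C(X)) = {entropy(outputs, 2**m):.4f} bits "
      f"out of {m} input bits, so this C is t-lossy for any t at or above that value.")
```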

    Leveraging Peer Feedback to Improve Visualization Education

    Peer review is a widely utilized pedagogical feedback mechanism for engaging students, which has been shown to improve educational outcomes. However, we find limited discussion and empirical measurement of peer review in visualization coursework. In addition to engagement, peer review provides direct and diverse feedback and reinforces recently learned course concepts through critical evaluation of others' work. In this paper, we discuss the construction and application of peer review in a computer science visualization course, including: projects that reuse code and visualizations in a feedback-guided, continual improvement process, and a peer review rubric to reinforce key course concepts. To measure the effectiveness of the approach, we evaluate student projects, peer review text, and a post-course questionnaire from three semesters of mixed undergraduate and graduate courses. The results indicate that course concepts are reinforced with peer review: 82% of students reported learning more because of peer review, and 75% recommended continuing it. Finally, we provide a roadmap for adapting peer review to other visualization courses to produce more highly engaged students.

    Quantum Transformations

    We show that the stationary quantum Hamilton-Jacobi equation of non-relativistic 1D systems, underlying Bohmian mechanics, takes the classical form with ∂_q replaced by ∂_{q̂}, where dq̂ = dq/√(1−β²). The β² term essentially coincides with the quantum potential that, like V − E, turns out to be proportional to a curvature arising in projective geometry. In agreement with the recently formulated equivalence principle, these "quantum transformations" indicate that the classical and quantum potentials deform space geometry. Comment: 19 pages, LaTeX. Extended version to be published in Phys. Lett.
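
    For orientation, a minimal sketch of the standard forms being compared (using the usual Bohmian decomposition ψ = R e^{iS₀/ħ}; q̂ is the deformed coordinate from the abstract, and the identification of the β² term with the quantum potential is the paper's claim, not derived here):

```latex
% Reference forms, assuming the usual Bohmian decomposition \psi = R\,e^{iS_0/\hbar};
% \hat q is the deformed coordinate introduced in the abstract.
\begin{gather*}
  \text{classical stationary HJ:}\quad
    \frac{1}{2m}\left(\frac{\partial S_0}{\partial q}\right)^{2} + V(q) - E = 0,\\[4pt]
  \text{quantum stationary HJ:}\quad
    \frac{1}{2m}\left(\frac{\partial S_0}{\partial q}\right)^{2} + V(q) - E
      - \frac{\hbar^{2}}{2m}\,\frac{\partial_{q}^{2}R}{R} = 0,\\[4pt]
  \text{claimed classical form:}\quad
    \frac{1}{2m}\left(\frac{\partial S_0}{\partial \hat q}\right)^{2} + V(q) - E = 0,
    \qquad d\hat q = \frac{dq}{\sqrt{1-\beta^{2}}}.
\end{gather*}
```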

    The power of negations in cryptography

    The study of monotonicity and negation complexity for Boolean functions has been prevalent in complexity theory as well as in computational learning theory, but little attention has been given to it in the cryptographic context. Recently, Goldreich and Izsak (2012) initiated a study of whether cryptographic primitives can be monotone, and showed that one-way functions can be monotone (assuming they exist), but a pseudorandom generator cannot. In this paper, we start by filling in the picture and proving that many other basic cryptographic primitives cannot be monotone. We then initiate a quantitative study of the power of negations, asking how many negations are required. We provide several lower bounds, some of them tight, for various cryptographic primitives and building blocks including one-way permutations, pseudorandom functions, small-bias generators, hard-core predicates, error-correcting codes, and randomness extractors. Among our results, we highlight the following. Unlike one-way functions, one-way permutations cannot be monotone. We prove that pseudorandom functions require log n − O(1) negations (which is optimal up to the additive term). We prove that error-correcting codes with optimal distance parameters require log n − O(1) negations (again, optimal up to the additive term). We prove a general result for monotone functions, showing a lower bound on the depth of any circuit with t negations on the bottom that computes a monotone function f, in terms of the monotone circuit depth of f. This result addresses a question posed by Koroth and Sarma (2014) in the context of the circuit complexity of the Clique problem.
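
    To make "how many negations are required" concrete, the sketch below (illustrative only, not from the paper) computes the negation complexity of small Boolean functions via Markov's classical characterization: the minimum number of NOT gates needed to compute f equals ⌈log₂(d(f)+1)⌉, where d(f) is the maximum number of 1→0 drops of f along an increasing chain of inputs.

```python
# Brute-force illustration of negation complexity via Markov's characterization:
# the minimum number of NOT gates needed to compute f equals ceil(log2(d(f)+1)),
# where d(f) is the maximum number of 1 -> 0 drops of f along any increasing
# chain in {0,1}^n.  Feasible only for very small n.
import math
from itertools import permutations

def decrease(f, n: int) -> int:
    """Maximum number of value drops of f along a maximal chain of {0,1}^n."""
    best = 0
    for order in permutations(range(n)):   # each maximal chain flips bits in some order
        x, drops, prev = 0, 0, f(0)
        for i in order:
            x |= 1 << i
            cur = f(x)
            drops += int(prev == 1 and cur == 0)
            prev = cur
        best = max(best, drops)
    return best

def negation_complexity(f, n: int) -> int:
    return math.ceil(math.log2(decrease(f, n) + 1))

n = 4
parity = lambda x: bin(x).count("1") % 2
majority = lambda x: int(bin(x).count("1") > n / 2)
print("parity on 4 bits:  ", negation_complexity(parity, n), "negation(s)")
print("majority on 4 bits:", negation_complexity(majority, n), "negation(s) (monotone)")
```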

    Efficient Lossy Trapdoor Functions based on the Composite Residuosity Assumption

    THIS REPORT IS REPLACED BY REPORT 2009/590